A kernel-based Adaline
Abstract
By expanding a function in series form, it can be represented to arbitrary accuracy by taking enough terms. It is therefore possible, in principle, to conduct a linear regression on a new set of variables transformed by a fixed mapping. In practice this imposes a large computational burden and requires an infeasible amount of data from which the coefficients must be estimated, so it is not generally practical for function approximation. The algorithm studied in [1] is a linear Perceptron computed implicitly in an infinite-dimensional space (the linearisation space) using potential (kernel) functions. Kernels have since been exploited in the Support Vector Machine (SVM) [2], principal component analysis [3], linear programming machines [4] and clustering [5]. The kernel-Adatron [4,6,7] provides a fast, simple, and robust alternative to SVM classifiers, constructing arbitrary large-margin discriminant functions iteratively and so avoiding the SVM's intensive QP computations. We now use kernels to develop a non-linear version of the Adaline [8], yielding a general non-linear adaptive mapping device via an algorithm with well-documented properties. Selecting an appropriate kernel and its parameter specifies the mapping to the linearisation space; this can be done empirically, via cross-validation.
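As a concrete illustration of the idea, the sketch below runs LMS (delta-rule) sweeps in a kernel-induced feature space: each training point x_i carries a coefficient alpha_i, the implicit weight vector is w = sum_i alpha_i * phi(x_i), and a Gaussian kernel plays the role of the potential function. This is a minimal reconstruction of the concept, not the authors' code; the kernel choice, learning rate, and function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kernel_adaline(X, y, gamma=1.0, eta=0.5, epochs=2000):
    """Delta-rule sweeps over a fixed training set, expressed in dual
    (coefficient) form: the implicit weight vector never appears, only
    the Gram matrix K and one coefficient per training point."""
    K = rbf_kernel(X, X, gamma)       # Gram matrix of the training set
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        e = y - K @ alpha             # residuals on the training set
        alpha += eta * e / len(X)     # delta-rule step on every coefficient
    return alpha

def predict(alpha, X_train, X_new, gamma=1.0):
    # f(x) = sum_i alpha_i * k(x_i, x)
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy usage: learn y = sin(x), which a linear Adaline cannot represent.
X = np.linspace(0.0, 2 * np.pi, 40)[:, None]
y = np.sin(X).ravel()
alpha = train_kernel_adaline(X, y, gamma=0.5)
```

The kernel width (here gamma) is exactly the parameter the abstract suggests tuning empirically, e.g. by cross-validation.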
Similar Articles
Artificial Intelligent System for Measurement of Harmonic Powers
The importance of electric power quality (PQ) demands new methodologies and measurement tools in the power industry for analysing and measuring the basic electrical quantities involved. This paper presents a new measurement procedure, based on neural networks, for estimating the harmonic amplitudes of current/voltage and the corresponding harmonic powers. The measurement scheme is built wi...
Convergence Analysis of the Information Potential Criterion in Adaline Training
In our recent studies we have proposed the minimum error entropy criterion as an alternative to the mean square error (MSE) criterion in supervised adaptive system training. We have formulated a nonparametric estimator for Renyi's entropy with the help of Parzen windowing. This formulation revealed interesting insights about the process of information-theoretic learning. We have applied this new ...
Kernel Affine Projection Algorithms
The combination of the famed kernel trick and affine projection algorithms (APA) yields powerful nonlinear extensions, collectively named KAPA here. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a u...
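The KLMS scheme that KAPA builds on can be sketched in a few lines: every incoming sample becomes a kernel center whose coefficient is the step size times the a-priori error. This is a minimal sketch under assumed choices (Gaussian kernel, step size, function name), not the paper's implementation:

```python
import numpy as np

def klms(stream, eta=0.2, gamma=2.0):
    """Kernel least-mean-square on a stream of (x, y) pairs: the
    predictor is a growing radial-basis expansion over past inputs,
    updated by appending one center per sample."""
    centers, coeffs, errors = [], [], []
    for x_t, y_t in stream:
        pred = sum(a * np.exp(-gamma * (x_t - c) ** 2)
                   for a, c in zip(coeffs, centers))
        e = y_t - pred              # a-priori error on the new sample
        centers.append(x_t)
        coeffs.append(eta * e)      # LMS step taken in the feature space
        errors.append(e)
    return centers, coeffs, errors

# Stream noiseless samples of a sine; the later a-priori errors shrink
# as the kernel expansion takes shape.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 2 * np.pi, 400)
centers, coeffs, errors = klms(zip(xs, np.sin(xs)))
```

The growing dictionary is the practical cost of the method, which is one motivation for the gradient-noise and sparsification refinements KAPA discusses.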
Convergence properties and data efficiency of the minimum error entropy criterion in ADALINE training
Recently, we have proposed the minimum error entropy (MEE) criterion as an information theoretic alternative to the widely used mean square error criterion in supervised adaptive system training. For this purpose, we have formulated a nonparametric estimator for Renyi’s entropy that employs Parzen windowing. Mathematical investigation of the proposed entropy estimator revealed interesting insig...
Improved Adaline Networks for Robust Pattern Classification
The Adaline network [1] is a classic neural architecture whose learning rule is the famous least mean squares (LMS) algorithm (a.k.a. delta rule or Widrow-Hoff rule). It has been demonstrated that the LMS algorithm is optimal in the H∞ sense since it tolerates small (in energy) disturbances, such as measurement noise, parameter drifting and modelling errors [2,3]. Such optimality of the LMS algorit...
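The Widrow-Hoff rule referred to above admits a very short sketch: on each sample, nudge the weight vector along the input, scaled by the prediction error. The data, step size, and function name below are made up for illustration:

```python
import numpy as np

def lms_train(X, y, eta=0.01, epochs=100):
    # Widrow-Hoff delta rule: w <- w + eta * (y_t - w.x_t) * x_t
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            e = y_t - w @ x_t      # prediction error on this sample
            w += eta * e * x_t     # gradient step on the squared error
    return w

# Recover a known linear map y = 2*x0 - x1 from noiseless data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0])
w = lms_train(X, y)
```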